In [36]:
using ApproxFun, Plots, ComplexPhasePortrait, SingularIntegralEquations, SpecialFunctions
using SingularIntegralEquations.HypergeometricFunctions
gr();
Dr Sheehan Olver
s.olver@imperial.ac.uk
Office Hours: 3-4pm Mondays, 11-12am Thursdays, Huxley 6M40
Website: https://github.com/dlfivefifty/M3M6LectureNotes
A special function is a function that can't be expressed in closed form in terms of classical functions such as $\cos$ and $\sin$. We've seen a few special functions so far: \begin{align*} \Ei z &= \int_{-\infty}^z {\E^\zeta \over \zeta} \D \zeta \\ \erfc z &= {2 \over \sqrt \pi} \int_z^\infty \E^{-\zeta^2} \D \zeta \\ \Gamma(\alpha, z) &= \int_z^\infty \zeta^{\alpha-1} \E^{-\zeta} \D\zeta. \end{align*} But we've also seen special functions in the form of orthogonal polynomials.
Most special functions solve simple ODEs whose variable coefficients are low-degree rational functions. For example, the following special functions satisfy second-order ODEs:
$u(z) = \E^{z} \Gamma(\alpha, z)$ satisfies \begin{align*} {\D u \over \dz} - u &= -z^{\alpha-1} \qquad\Rightarrow \\ z {\D^2 u \over \dz^2} + (1- \alpha -z) {\D u \over \dz} + (\alpha-1)u &= 0 \end{align*} where the second equation follows by differentiating the first, multiplying by $z$, and using the first equation again to eliminate $z^{\alpha-1}$.
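As a quick sanity check of the first-order equation, here is a minimal sketch using gamma(α, z) (the upper incomplete gamma from SpecialFunctions) and a central finite difference for the derivative:

α, z, h = 0.7, 1.3, 1e-6
u = t -> exp(t)*gamma(α, t)          # u(z) = eᶻ Γ(α, z)
du = (u(z+h) - u(z-h))/(2h)          # central finite difference
du - u(z) ≈ -z^(α-1)                  # true up to finite-difference error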
The Laguerre polynomial $L_n^{(a)}(x)$ satisfies $$ x {\D^2 L_n^{(a)} \over \dx^2} + (a+1-x) {\D L_n^{(a)} \over \dx} + n L_n^{(a)} = 0 $$
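We can spot-check this for $n = 2$ using the standard explicit formula $L_2^{(a)}(x) = {x^2 \over 2} - (a+2)x + {(a+1)(a+2) \over 2}$:

a, x = 0.3, 1.7
L  = x^2/2 - (a+2)*x + (a+1)*(a+2)/2
L′ = x - (a+2)
L″ = 1.0
abs(x*L″ + (a+1-x)*L′ + 2*L) < 1e-13  # true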
A natural question arises: what is the relationship between the singularities of the variable coefficients and the singularities of the solutions?
Consider the solution of a first order ODE $$ {\D u\over \dz} = a(z) u\qqand u(z_0) = c $$ which we can write as $$ u(z) = c \E^{\int_{z_0}^z a(\zeta) \D \zeta} $$ That is, we can think of the solution as living on a contour, corresponding to the contour of integration in the exponent.
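Here is a minimal sketch of this formula, assuming ApproxFun's Segment supports complex endpoints with sum(f) returning the contour integral $\int_\gamma f(\zeta) \D \zeta$:

a = ζ -> cos(ζ)                        # entire, so any contour works
z₀, z, c = 0.0, 1.0+1.0im, 1.0
u = c*exp(sum(Fun(a, Segment(z₀, z)))) # straight-line contour from z₀ to z
u ≈ exp(sin(z))                         # true, since ∫₀ᶻ cos ζ dζ = sin z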
Alternatively, we can think of the ODE itself as living on a contour $\gamma : (a,b) \rightarrow \C$. In the first order case, the change of variables $v(t) = u(\gamma(t))$ reduces the ODE to $$ {\D v \over \dt} = \gamma'(t) u'(\gamma(t)) = \gamma'(t) a(\gamma(t)) u(\gamma(t)) = \gamma'(t) a(\gamma(t)) v $$ Thus, provided we choose the contour to avoid the singularities of $a(z)$, we can define the solution; but the value of $u(z)$ can depend on the choice of contour.
Normally, the contour is taken as a straight line, so that poles in $a(z)$ can induce branch cuts in $u(z)$.
Example Consider $$ {\D u\over \dz} = u \qqand u(0) = 1 $$ with solution $u(z) = \E^z$. Consider a contour like $\gamma(t) = (1+\I )t$. Then we have for $v(t) = u(\gamma(t)) = \E^{(1+\I )t}$ that $v$ satisfies the ODE $$ {\D v \over \dt} = (1+\I ) v \qqand v(0) = 1 $$
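A one-line check that the solution along the contour agrees with $\E^z$:

γ = t -> (1+im)*t
v = t -> exp((1+im)*t)     # solves v' = (1+im)v, v(0) = 1
v(0.7) ≈ exp(γ(0.7))       # true: v(t) = u(γ(t)) = e^γ(t)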
Example Now consider an ODE with a pole: $$ {\D u\over \dz} = {k u \over z} \qqand u(1) = 1 $$ with solution $u(z) = z^k$. Consider two different choices of contour: $\gamma_1(t) = \E^{\I t}$ and $ \gamma_2(t) = \E^{-\I t}$ for $0 \leq t \leq 2\pi$. For $v_1(t) = u(\gamma_1(t))$ we have the ODE: $$ {\D v_1\over \dt} = \I k v_1 \qqand v_1(0) = 1 $$ with solution $v_1(t) = \E^{\I k t}$ (and similarly $v_2(t) = \E^{- \I k t}$). Hence we have \begin{align*} u(1) &= u(\E^{2 \I \pi}) \questionequals v_1(2\pi) = \E^{2 \pi \I k} \\ u(1) &= u(\E^{-2 \I \pi}) \questionequals v_2(2\pi) = \E^{-2 \pi \I k}. \end{align*} When $k$ is not an integer, each of these is a different number.
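For instance, with $k = 1/3$ the two continuations back to $z = 1$ disagree:

k = 1/3
v₁ = exp(2π*im*k)    # counterclockwise around the pole
v₂ = exp(-2π*im*k)   # clockwise around the pole
v₁ ≈ v₂               # false
abs(v₁ - v₂)          # = 2sin(2πk) ≈ 1.732…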
This non-uniqueness means we should think of the solution of an ODE as living along a contour. In what sense is $u(z)$ analytic? We can deduce that the radius of convergence of the solution about $z_0$ is dictated by the radius of convergence of $a(z)$, that is, by the distance to the closest singularity.
Theorem Suppose $a(z)$ is analytic in a disk of radius $R$ centred at $z_0$. Then the solution $u(z)$ is also analytic in a disk of radius $R$ centred at $z_0$.
Sketch of proof We will show this using Taylor series (using operator notation). If we represent (here we take $z_0 = 0$) $$ u(z) = u_0 + u_1 z+ u_2 z^2 + \cdots = (1,z,z^2,\ldots) \begin{pmatrix} u_0\\u_1\\u_2\\\vdots \end{pmatrix} $$ then the derivative operator has a very simple form: $$ u'(z) = (1,z,z^2,\ldots) \begin{pmatrix} 0 & 1 \\ && 2 \\ &&&3 \\ &&&&\ddots \end{pmatrix} \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} $$ On the other hand, multiplication by $z$ has the following operator form: $$ z u(z) = (1,z,z^2,\ldots) \begin{pmatrix} 0 \\ 1 \\ & 1 \\ &&1 \\ &&&\ddots \end{pmatrix} \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} $$ Each time we multiply by $z$, the coefficients get shifted down. Thus multiplication by $$ a(z) = a_0 + a_1 z+ a_2 z^2 + \cdots $$ has the form $$ a(z) u(z) = (1,z,z^2,\ldots) \begin{pmatrix} a_0 \\ a_1 & a_0 \\ a_2 & a_1 & a_0 \\ a_3 & a_2 & a_1 & a_0 \\ \vdots &\ddots&\ddots&\ddots&\ddots \end{pmatrix} \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} $$ Thus the ODE $u'(z) - a(z) u(z)= 0$ with $u(0) = c$ becomes: $$ \begin{pmatrix} 1 \\ -a_0 & 1 \\ -a_1 & -a_0 & 2 \\ -a_2 & -a_1 & -a_0 & 3 \\ -a_3 & -a_2 & -a_1 & -a_0 & 4 \\ \vdots &\ddots&\ddots&\ddots&\ddots & \ddots \end{pmatrix} \begin{pmatrix} u_0\\u_1\\u_2 \\ \vdots \end{pmatrix} = \begin{pmatrix} c \\ 0 \\\vdots \end{pmatrix} $$ This is solvable via forward substitution.
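A minimal sketch of this forward substitution in plain Julia (taylorcoefficients is a hypothetical helper, not a library function):

function taylorcoefficients(a::AbstractVector, c, n)
    u = zeros(ComplexF64, n)
    u[1] = c                                       # row 0: u₀ = c
    for k = 1:n-1                                  # row k: k uₖ = Σⱼ a_{k-1-j} uⱼ
        u[k+1] = sum(a[k-j]*u[j+1] for j = 0:k-1)/k
    end
    u
end

# a(z) = 1 gives u(z) = eᶻ, so the coefficients should be 1/k!:
taylorcoefficients([1.0; zeros(9)], 1.0, 10) ≈ [1/factorial(k) for k = 0:9]  # true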
Assume that the radius of convergence of $a$ is $R$, that is, for every $r < R$ we have $|a_k| \leq {C \over r^k}$ for some constant $C = C(r)$. The worst case for the growth of $u_k$ occurs when every $a_k$ is positive and as large as possible; therefore (taking $c = 1$ for simplicity) we have, entrywise,
$$
\left| \begin{pmatrix} u_0\\u_1\\u_2 \\
\vdots
\end{pmatrix} \right| \leq \begin{pmatrix} 1 \\ -C & 1 \\ -C r^{-1} & -C & 2 \\ -C r^{-2} & -C r^{-1} & -C & 3 \\ -C r^{-3} & -C r^{-2} & -C r^{-1} & -C & 4 \\ \vdots &\ddots&\ddots&\ddots&\ddots & \ddots \end{pmatrix}^{-1}\begin{pmatrix} 1 \\ 0 \\\vdots \end{pmatrix}
$$
That is, we can bound $|u_k| \leq w_k$ where $w_k$ solves
$$
\begin{pmatrix} 1 \\ -C & 1 \\ -C r^{-1} & -C & 2 \\ -C r^{-2} & -C r^{-1} & -C & 3 \\ -C r^{-3} & -C r^{-2} & -C r^{-1} & -C & 4 \\ \vdots &\ddots&\ddots&\ddots&\ddots & \ddots \end{pmatrix}\begin{pmatrix}w_0 \\ w_1 \\\vdots \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\\vdots \end{pmatrix}
$$
This is exactly the coefficient system for the ODE $w'(z) = \tilde a(z) w(z)$, $w(0) = 1$, with $\tilde a(z)$ defined as
$$
\tilde a(z) = C\sum_{k=0}^\infty r^{-k} z^k = {C r \over r-z}
$$
This motivates multiplying the equation by $r-z$ (leaving the initial-condition row alone), or in coefficient space, by:
$$
\begin{pmatrix}
1 \\
-1 & r \\
&-1 & r \\
&&\ddots & \ddots
\end{pmatrix}
$$
which simplifies things:
$$
\begin{pmatrix} 1 \\ -1-Cr & r \\ & -1-Cr & 2r \\ & & -2-Cr & 3r \\ & & & -3-Cr & 4r \\ &&&&\ddots & \ddots \end{pmatrix}\begin{pmatrix}w_0 \\ w_1 \\\vdots \end{pmatrix} = \begin{pmatrix} 1 \\ -1 \\0 \\\vdots \end{pmatrix}
$$
Therefore we have
$$
w_k = r^{-1}\,{k-1 + C r \over k}\, w_{k-1} \leq r^{-1}(1 + C r/k) w_{k-1} \leq \cdots \leq r^{-k}(1 + C r/k) \cdots (1 + C r) w_0.
$$
The product grows only polynomially, like $k^{Cr}$: indeed, the majorant ODE has the closed-form solution $w(z) = (1-z/r)^{-Cr}$, whose coefficients grow like $k^{Cr-1} r^{-k}$. Hence $w_k = O(\tilde r^{-k})$ for every $\tilde r < r$, and since $r < R$ was arbitrary, the Taylor series of $u$ converges in the disk of radius $R$.
⬛️
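A quick numerical check that the product in the proof grows only polynomially (the ratio to $k^{Cr}$ settles down to a constant):

C, r = 2.0, 0.5
prodk(k) = prod(1 + C*r/j for j = 1:k)
[prodk(k)/k^(C*r) for k in (10, 100, 1000)]   # ≈ constant (1/Γ(1+Cr))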
Remark This proof can be adapted to the vector-valued case, which gives the equivalent result for $$ u''(z) + a(z) u'(z) + b(z) u(z) = 0 $$ namely, that the radius of convergence of $u$ is at least the smaller of the radii of convergence of $a$ and $b$.
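A sketch of the corresponding forward substitution for the second-order case (taylorcoefficients2 is again a hypothetical helper, with the same indexing conventions as above):

function taylorcoefficients2(a::AbstractVector, b::AbstractVector, u0, u1, n)
    u = zeros(ComplexF64, n)
    u[1], u[2] = u0, u1
    for k = 0:n-3    # coefficient of zᵏ in u'' + a u' + b u = 0
        s = sum(a[k-j+1]*(j+1)*u[j+2] + b[k-j+1]*u[j+1] for j = 0:k)
        u[k+3] = -s/((k+1)*(k+2))
    end
    u
end

# Airy-type equation u'' - z u = 0: a = 0 and b = -z are entire,
# so the series converges everywhere. (u₃ = 1/6, u₆ = 1/180, …)
taylorcoefficients2(zeros(10), [0.0; -1.0; zeros(8)], 1.0, 0.0, 10)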
We know $u$ is analytic around $z_0$ with a non-zero radius of convergence. But given a curve, we can re-expand around another point inside the disk of convergence of the first expansion, to get analyticity in another disk:
In [25]:
γ = Arc(0.,1., (0,π))
p = plot(γ; label="contour")
scatter!(p, [0.],[0.]; label="singularity of a")
r = 0.5
for k in range(0, π; length=10)    # expansion points along the contour
    plot!(p, Circle(exp(im*k), r); color=:green, label="")
end
p
Out[25]:
In this sense, provided $a$ is analytic in a neighbourhood of $\gamma$, $u$ can be analytically continued along $\gamma$. Note that as soon as this analytic continuation wraps back around to its starting point, we have no guarantee that the values agree, as the $z^k$ example above shows.